Paper Session II-C - Decreasing the cost of Spacecraft Processing
This paper focuses on spacecraft and payload design for efficient and low-cost launch site processing. An important assumption here is the use of a modern processing facility, one that is optimized for satellites designed for low-cost operations at the launch site. A typical spacecraft processing flow includes operations in three different areas: non-hazardous, hazardous, and on-pad checkout activities. Smart design practices in each of these areas can reduce the time span for payload processing and thus lower costs throughout the operation. A state-of-the-art processing facility requires a large expenditure of funds up front, but will recoup those dollars over the useful life of the facility and the satellites processed through it. The purpose of this paper is to propose a series of design requirements for any satellite application, be it a series intended as part of a constellation (e.g. GPS) or a one-time mission (e.g. Cassini). New spacecraft designers must be able to think past the boundaries of the satellite bus and its payload and determine how the system will be readied for launch at the launch base. This requires a systems approach to design, one based on a rigorous requirements definition process and regular interaction with the engineers who will perform the processing operations prior to launch. Most engineers are not schooled in designing for ease of processing; they are taught how to create a satellite to perform a certain mission. This paper aims to highlight efficient and simple launch site processing activities. While not serving as an all-inclusive checklist for smart processing, it tries to get the new spacecraft engineer to start thinking about streamlining launch base activities.
A One Step collision Detection Method for Computer Graphics Programs
The topic of this thesis is a collision detection algorithm for use in computer programs dealing with three-dimensional graphics. Collision detection is usually accomplished by breaking a movement into small steps and checking whether a collision has occurred at each of these discrete steps. This is a very time-intensive way to detect collisions and therefore inefficient for a system that typically moves in large increments. For such a system, a method can be developed that checks once for collisions without dividing the move into multiple small increments. The subject of this paper is an algorithm, developed for use in a computer program, that allows the user to make large movements of the objects and check for collisions quickly and efficiently.
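The one-step idea described above can be illustrated with a swept-sphere test: rather than stepping through the move and checking for overlap at each increment, one solves directly for the earliest time of contact over the whole move. This is only a minimal sketch under assumed sphere-shaped bounding volumes; the function name and its interface are illustrative, not the thesis's actual algorithm.

```python
import math

def first_contact(c1, v1, r1, c2, v2, r2):
    """Return the earliest t in [0, 1] at which two moving spheres touch, or None.

    c1, c2: centre positions at t = 0; v1, v2: displacement over the full
    move (so t = 1 is the end of the step); r1, r2: sphere radii.
    """
    w = [b - a for a, b in zip(c1, c2)]   # relative position (c2 - c1)
    u = [b - a for a, b in zip(v1, v2)]   # relative displacement (v2 - v1)
    R = r1 + r2
    # |w + u t| = R leads to a quadratic a t^2 + b t + c = 0
    a = sum(x * x for x in u)
    b = 2.0 * sum(x * y for x, y in zip(w, u))
    c = sum(x * x for x in w) - R * R
    if c <= 0.0:                           # already overlapping at t = 0
        return 0.0
    if a == 0.0:                           # no relative motion, no new contact
        return None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                         # paths never come within R
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a) # earlier of the two roots
    return t if 0.0 <= t <= 1.0 else None
```

A single call covers an arbitrarily large move, which is exactly the advantage over the discrete-stepping approach: cost is independent of the move's length.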
Axon diameter measurements using diffusion MRI are infeasible
The feasibility of non-invasive axonal diameter quantification with diffusion MRI is a strongly debated topic, due to the neuroscientific potential of such information and its relevance for the axonal signal transmission speed. It has been shown that, under ideal conditions, the minimal diameter producing detectable signal decay is larger than most human axons in the brain, even using the strongest currently available MRI systems. We show that resolving even the simplest situations involving multiple diameters is infeasible, even with diameters much larger than the diameter limit. Additionally, the recently proposed effective diameter, resulting from fitting a single value over a distribution, is almost exclusively influenced by the largest axons; we show how impractical this metric is for comparing different distributions. Overall, axon diameters currently cannot be quantified by diffusion MRI in any relevant way.
Test of the Conserved Vector Current Hypothesis by beta-ray Angular Distribution Measurement in the Mass-8 System
The beta-ray angular correlations for the spin alignments of 8Li and 8B have been observed in order to test the conserved vector current (CVC) hypothesis. The alignment correlation terms were combined with the known beta-alpha angular correlation terms to determine all the matrix elements contributing to the correlation terms. The weak magnetism term, 7.5 ± 0.2, deduced from the beta-ray correlation terms, was consistent with the CVC prediction of 7.3 ± 0.2 deduced from the analog-gamma-decay measurement based on the CVC hypothesis. However, there was no consistent CVC prediction for the second-forbidden term associated with the weak vector current. The experimental value for the second-forbidden term was 1.0 ± 0.3, while the CVC prediction was 0.1 ± 0.4 or 2.1 ± 0.5.
Comment: 31 pages, 12 figures, accepted for publication in Phys. Rev.
Interactive presentation of geo-spatial climate data in multi-display environments
Competition for inorganic and organic forms of nitrogen and phosphorus between phytoplankton and bacteria during an <i>Emiliania huxleyi</i> spring bloom
Using <sup>15</sup>N and <sup>33</sup>P, we measured the turnover of organic and inorganic nitrogen (N) and phosphorus (P) substrates, and the partitioning of N and P from these sources into two size fractions of marine osmotrophs, during the course of a phytoplankton bloom in a nutrient-manipulated mesocosm. The larger size fraction (>0.8 μm), mainly consisting of the coccolithophorid <i>Emiliania huxleyi</i> but also including an increasing amount of large particle-associated bacteria as the bloom proceeded, dominated uptake of the inorganic forms NH<sub>4</sub><sup>+</sup>, NO<sub>3</sub><sup>−</sup>, and PO<sub>4</sub><sup>3−</sup>. The uptake of N from leucine, and of P from ATP and dissolved DNA, was initially dominated by the 0.8–0.2 μm size fraction, but shifted towards dominance by the >0.8 μm size fraction as the system became increasingly N-deficient. Normalizing uptake to biomass of phytoplankton and heterotrophic bacteria revealed that organisms in the 0.8–0.2 μm size fraction had higher specific affinity for leucine-N than those in the >0.8 μm size fraction when N was deficient, whereas the opposite was the case for NH<sub>4</sub><sup>+</sup>. There was no such difference regarding the specific affinity for P substrates. Since heterotrophic bacteria seem to acquire N from organic compounds like leucine more efficiently than phytoplankton do, our results suggest different structuring of the microbial food chain in N-limited relative to P-limited environments.
LUX -- A Laser-Plasma Driven Undulator Beamline
The LUX beamline is a novel type of laser-plasma accelerator. Building on the joint expertise of the University of Hamburg and DESY, the beamline was carefully designed to combine state-of-the-art expertise in laser-plasma acceleration with the latest advances in accelerator technology and beam diagnostics. LUX introduces a paradigm change, moving from single-shot demonstration experiments towards available, stable, and controllable accelerator operation. Here, we discuss the general design concepts of LUX and present the first critical milestones that have recently been achieved, including the generation of electron beams at repetition rates of up to 5 Hz with energies above 600 MeV, and the generation of spontaneous undulator radiation at a wavelength well below 9 nm.
Design considerations for table-top, laser-based VUV and X-ray free electron lasers
A recent breakthrough in laser-plasma accelerators, based upon ultrashort high-intensity lasers, demonstrated the generation of quasi-monoenergetic GeV electrons. With future petawatt lasers, ultra-high beam currents of ~100 kA in ~10 fs can be expected, allowing for a drastic reduction in the undulator length of free-electron lasers (FELs). We present a discussion of the key aspects of a table-top FEL design, including energy loss and chirps induced by space charge and wakefields; these effects become important for optimized table-top FEL operation. A first proof-of-principle VUV case is considered, as well as a table-top X-ray FEL that, as a brilliant light source, may also open new possibilities in clinical diagnostics.
Comment: 6 pages, 4 figures; accepted for publication in Appl. Phys.